15 research outputs found
Improved Algorithms for Time Decay Streams
In the time-decay model for data streams, elements of an underlying data set arrive sequentially, with recently arrived elements being more important. A common approach for handling large data sets is to maintain a coreset, a succinct summary of the processed data that allows approximate recovery of a predetermined query. We provide a general framework that takes any offline coreset construction and yields a time-decay coreset for polynomial time-decay functions.
We also consider the exponential time-decay model for k-median clustering, where we provide a constant-factor approximation algorithm that utilizes the online facility location algorithm. Our algorithm stores O(k log(h Delta) + h) points, where h is the half-life of the decay function and Delta is the aspect ratio of the dataset. Our techniques extend to k-means clustering and M-estimators as well.
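As a rough illustration of how an online facility location routine can be combined with exponential decay, here is a minimal, hypothetical 1-D sketch. The Meyerson-style opening rule is standard, but the weighting scheme, distances, and constants are our own simplifications, not the paper's algorithm (which additionally bounds storage by O(k log(h Delta) + h)):

```python
import random

def decayed_ofl(points, facility_cost, half_life):
    """Toy sketch: online facility location with exponentially decayed
    point weights. Illustrative only, not the paper's algorithm."""
    T = len(points)
    facilities = []
    for t, x in enumerate(points):
        # Exponentially decayed weight: halves every `half_life` steps.
        w = 2.0 ** (-(T - 1 - t) / half_life)
        if not facilities:
            facilities.append(x)
            continue
        d = min(abs(x - f) for f in facilities)  # distance to nearest facility
        # Open a new facility with probability proportional to weighted cost.
        if random.random() < min(w * d / facility_cost, 1.0):
            facilities.append(x)
    return facilities

# Example: recent points dominate, so facilities concentrate near them.
stream = [random.gauss(0, 1) for _ in range(500)] + \
         [random.gauss(10, 1) for _ in range(500)]
print(decayed_ofl(stream, facility_cost=5.0, half_life=100))
```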
From Adaptive Query Release to Machine Unlearning
We formalize the problem of machine unlearning as design of efficient
unlearning algorithms corresponding to learning algorithms which perform a
selection of adaptive queries from structured query classes. We give efficient
unlearning algorithms for linear and prefix-sum query classes. As applications,
we show that unlearning in many problems, in particular, stochastic convex
optimization (SCO), can be reduced to the above, yielding improved guarantees
for the problem. In particular, for smooth Lipschitz losses and any rho > 0, our results yield an unlearning algorithm with excess population risk of Õ(1/sqrt(n) + sqrt(d)/(n rho)) with unlearning query (gradient) complexity Õ(rho * Retraining Complexity), where d is the model dimensionality and n is the initial number of samples. For non-smooth Lipschitz losses, we give an unlearning algorithm with excess population risk Õ(1/sqrt(n) + (sqrt(d)/(n rho))^(1/2)) with the same unlearning query (gradient) complexity. Furthermore, in the special case of Generalized Linear Models (GLMs), such as those in linear and logistic regression, we get dimension-independent rates of Õ(1/sqrt(n) + 1/(n rho)^(2/3)) and Õ(1/sqrt(n) + 1/(n rho)^(1/3)) for smooth Lipschitz and non-smooth Lipschitz losses respectively. Finally, we give generalizations of the above from one unlearning request to dynamic streams consisting of insertions and deletions.
Comment: Accepted to ICML 2023.
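To make the prefix-sum query class concrete, here is a hedged toy sketch: if the learner's state can be recovered from prefix sums of per-sample contributions (e.g., gradients), storing them in a Fenwick (binary indexed) tree lets a deletion request update every affected prefix sum with O(log n) node writes instead of retraining from scratch. The class name and interface below are hypothetical, not the paper's implementation:

```python
class PrefixSumUnlearner:
    """Toy sketch: per-sample contributions stored in a Fenwick tree,
    so deleting one sample touches only O(log n) tree nodes."""

    def __init__(self, contributions):
        self.n = len(contributions)
        self.values = list(contributions)      # kept so we can delete later
        self.tree = [0.0] * (self.n + 1)
        for i, c in enumerate(contributions):
            self._add(i + 1, c)

    def _add(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def prefix(self, i):
        """Sum of contributions[0:i], as a learning algorithm might query."""
        s = 0.0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

    def unlearn(self, idx):
        """Remove sample idx's contribution from all prefix sums."""
        self._add(idx + 1, -self.values[idx])
        self.values[idx] = 0.0

# Example: delete sample 2 and observe all later prefix sums shift.
u = PrefixSumUnlearner([1.0, 2.0, 3.0, 4.0])
print(u.prefix(4))   # 10.0
u.unlearn(2)
print(u.prefix(4))   # 7.0
```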
Private Federated Learning with Autotuned Compression
We propose new techniques for reducing communication in private federated
learning without the need for setting or tuning compression rates. Our
on-the-fly methods automatically adjust the compression rate based on the error
induced during training, while maintaining provable privacy guarantees through
the use of secure aggregation and differential privacy. Our techniques are
provably instance-optimal for mean estimation, meaning that they can adapt to
the "hardness of the problem" with minimal interactivity. We demonstrate the
effectiveness of our approach on real-world datasets by achieving favorable
compression rates without the need for tuning.
Comment: Accepted to ICML 2023.
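A schematic of the on-the-fly idea, as a toy of our own rather than the paper's method (and omitting the secure aggregation and differential privacy components): compress with a given bit budget, measure the error the compression induced, and let that error drive the next round's rate. The +/-1-bit rule and error target are illustrative assumptions:

```python
import numpy as np

def quantize(v, bits, rng):
    """Unbiased stochastic quantization of v onto 2**bits levels."""
    lo, hi = float(v.min()), float(v.max())
    span = max(hi - lo, 1e-12)
    levels = 2 ** bits - 1
    scaled = (v - lo) / span * levels
    low = np.floor(scaled)
    q = low + (rng.random(v.shape) < (scaled - low))  # randomized rounding
    return lo + q / levels * span

def autotuned_round(client_updates, bits, err_target, rng):
    """One round: compress each client update, aggregate, then adjust
    the bit budget based on the observed compression error."""
    compressed = [quantize(u, bits, rng) for u in client_updates]
    err = float(np.mean([np.linalg.norm(c - u)
                         for c, u in zip(compressed, client_updates)]))
    next_bits = bits + 1 if err > err_target else max(1, bits - 1)
    return np.mean(compressed, axis=0), next_bits

rng = np.random.default_rng(0)
updates = [rng.normal(size=100) for _ in range(10)]
agg, bits = autotuned_round(updates, bits=4, err_target=0.5, rng=rng)
```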
A Comparison of Art Program Evaluation Ratings by Self and Visiting Teams in Selected North Central Association High Schools (Arizona).
The investigation was concerned with 31 Arizona high schools which were members of the North Central Association of Colleges and Schools, which had employed the "Evaluative Criteria," 5th Edition, to evaluate themselves, and which had also received a visiting North Central Team evaluation. The study focused on the art section of each school's evaluation. It sought to ascertain differences between the schools' self-evaluation ratings and the ratings of the visiting North Central Teams on the 31 evaluation items of the Art Section.

A theoretical framework of six categories was constructed: Fundamentals, Self-Actualization, Evaluation, Progress, Support, and General Evaluation. Related literature was reviewed for each category to provide a backdrop of concepts with which to consider the collected data. Each of the 31 evaluation items was subsumed under one or another of these categories for systematic study and reporting. The data were analyzed by use of the product-moment correlation coefficient, and the obtained indices of relationship were examined for their significance at the .05 alpha level. A second statistical analysis was applied to the two sets of obtained ratings across the six categories. The factors of school size, i.e., small and large schools, and geographic location, i.e., rural and urban, were also considered statistically.

It was found that there was a significant positive set of relationships across the six categories of the theoretical framework between how the schools rated themselves in the art area and how the visiting North Central teams rated them. School size appeared to be a factor in only two of the six categories, i.e., Self-Evaluation and General Evaluation, while geographic location appeared to be a factor in the three categories of Self-Actualization, Evaluation, and Support.
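For readers unfamiliar with the statistic used, a product-moment (Pearson) correlation tested at the .05 level looks like this in practice. The rating vectors below are made-up placeholders, not the study's data:

```python
from scipy import stats

# Placeholder ratings for illustration only; not the study's data.
self_ratings     = [4, 3, 5, 2, 4, 3, 5, 4]
visiting_ratings = [4, 2, 5, 3, 4, 3, 4, 4]

r, p = stats.pearsonr(self_ratings, visiting_ratings)
print(f"r = {r:.3f}, p = {p:.4f}, significant at .05: {p < 0.05}")
```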
Art education in Afghanistan
Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration
RMSProp and ADAM continue to be extremely popular algorithms for training
neural nets but their theoretical convergence properties have remained unclear.
Further, recent work has seemed to suggest that these algorithms have worse
generalization properties when compared to carefully tuned stochastic gradient
descent or its momentum variants. In this work, we make progress towards a
deeper understanding of ADAM and RMSProp in two ways. First, we provide proofs
that these adaptive gradient algorithms are guaranteed to reach criticality for
smooth non-convex objectives, and we give bounds on the running time.
Next we design experiments to empirically study the convergence and
generalization properties of RMSProp and ADAM against Nesterov's Accelerated
Gradient method on a variety of common autoencoder setups and on VGG-9 with
CIFAR-10. Through these experiments we demonstrate the interesting sensitivity
that ADAM has to its momentum parameter beta_1. We show that at very high
values of the momentum parameter (beta_1 = 0.99) ADAM outperforms a
carefully tuned NAG on most of our experiments, in terms of getting lower
training and test losses. On the other hand, NAG can sometimes do better when
ADAM's beta_1 is set to the most commonly used value (beta_1 = 0.9),
indicating the importance of tuning the hyperparameters of ADAM to get better
generalization performance.
We also report experiments on different autoencoders to demonstrate that NAG
has better abilities in terms of reducing the gradient norms, and it also
produces iterates which exhibit an increasing trend for the minimum eigenvalue
of the Hessian of the loss function at the iterates.
Comment: Presented on 14th July 2018 at the ICML Workshop on Modern Trends in Nonconvex Optimization for Machine Learning. In this version, we have made changes to the setup of our Theorem 3.1 and added additional experimental results.
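For reference, the ADAM update rule itself is the standard one from Kingma & Ba (2015); the only knob the experiments above vary is the momentum parameter beta_1. A minimal sketch:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard ADAM update; beta1 is the momentum parameter whose
    setting (0.9 vs. 0.99) the experiments above vary."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) EMA
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Example: one step on f(theta) = theta**2 with the high-momentum setting.
theta, m, v = np.array(1.0), np.array(0.0), np.array(0.0)
theta, m, v = adam_step(theta, 2 * theta, m, v, t=1, beta1=0.99)
```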